
    Improved Runtime Bounds for the Univariate Marginal Distribution Algorithm via Anti-Concentration

    Unlike traditional evolutionary algorithms, which produce offspring via genetic operators, Estimation of Distribution Algorithms (EDAs) sample solutions from probabilistic models learned from selected individuals. It is hoped that EDAs may improve optimisation performance on epistatic fitness landscapes by learning variable interactions. However, hardly any rigorous results are available to support claims about the performance of EDAs, even for fitness functions without epistasis. The expected runtime of the Univariate Marginal Distribution Algorithm (UMDA) on OneMax was recently shown to be O(nλ log λ) by Dang and Lehre (GECCO 2015). Later, Krejca and Witt (FOGA 2017) proved the lower bound Ω(λ√n + n log n) via an involved drift analysis. We prove an O(nλ) bound, given some restrictions on the population size. This implies the tight bound Θ(n log n) when λ = O(log n), matching the runtime of classical EAs. Our analysis uses the level-based theorem and anti-concentration properties of the Poisson-Binomial distribution. We expect that these generic methods will facilitate further analysis of EDAs. Comment: 19 pages, 1 figure
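The UMDA loop described in the abstract (sample from a product distribution, select, refit the marginals) can be sketched in a few lines. This is a minimal illustrative sketch, not the exact algorithm analysed in the paper: the population sizes, the stopping rule, and the 1/n margins on the marginal probabilities are assumptions chosen for the example.

```python
import random

def onemax(x):
    """OneMax: number of one-bits; maximised by the all-ones string."""
    return sum(x)

def umda(f, n, lam=50, mu=10, max_gens=2000, rng=random):
    """Minimal UMDA sketch: sample lam bitstrings from a product
    distribution, keep the mu best, refit each marginal frequency."""
    p = [0.5] * n                      # marginal probability of a 1 per bit
    lo, hi = 1.0 / n, 1.0 - 1.0 / n    # margins keep p away from 0 and 1
    for _ in range(max_gens):
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
               for _ in range(lam)]
        pop.sort(key=f, reverse=True)
        best = pop[:mu]
        if f(best[0]) == n:            # OneMax-specific stopping rule
            return best[0]
        # Refit each marginal as the frequency of 1s among the selected.
        for i in range(n):
            freq = sum(x[i] for x in best) / mu
            p[i] = min(max(freq, lo), hi)
    return max(pop, key=f)
```

With λ = 50 and μ = 10 (selection rate 0.2), the sketch reliably drives the marginals towards 1 on OneMax; the margins prevent any marginal from fixing prematurely at 0.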

    Runtime analyses of univariate estimation of distribution algorithms under linearity, epistasis and deception

    Estimation of distribution algorithms (EDAs) have been successfully applied to solve many real-world optimisation problems. The algorithms work by building and maintaining probabilistic models over the search space and are widely considered a generalisation of evolutionary algorithms (EAs). While the theory of EAs has been enriched significantly over the last decades, our understanding of EDAs is sparse and limited. The past few years have seen some progress in this topic, showing competitive performance compared to other EAs on some simple test functions. This thesis studies the so-called univariate EDAs by rigorously analysing their time complexities on different fitness landscapes. Firstly, I show that the algorithms optimise the ONEMAX function as efficiently as the (1+1) EA does. I then investigate the algorithms' ability to cope with dependencies among decision variables. Despite the independence assumption, the algorithms optimise LEADINGONES – a test function with an epistasis level of n−1 – using at most O(n²) function evaluations under appropriate parameter settings. I also show that if the selection rate μ/λ is above some constant threshold, an exponential runtime is inevitable to optimise the function. Finally, I confirm the common belief that univariate EDAs have difficulties optimising some objective functions when deception occurs. By introducing a new test function with a very mild degree of deception, I show that the UMDA takes an exponential runtime unless the selective pressure is extremely high, i.e., μ/λ = O(1/μ). This thesis demonstrates that while univariate EDAs may cope well with independence and epistasis in the environment, the algorithms suffer even at a mild level of deception, and researchers might need to adopt multivariate EDAs when facing deceptive objective functions.
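The epistasis level of n−1 mentioned above is visible directly from the standard definition of LEADINGONES: bit i contributes to the fitness only if every earlier bit is 1, so the last bit interacts with all n−1 others. A sketch of the function:

```python
def leading_ones(x):
    """LeadingOnes: length of the longest all-ones prefix of x.
    Bit i counts only when bits 0..i-1 are all 1, so each bit
    depends on every earlier bit (epistasis level n-1)."""
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count
```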

    On the limitations of the univariate marginal distribution algorithm to deception and where bivariate EDAs might help

    We introduce a new benchmark problem called Deceptive Leading Blocks (DLB) to rigorously study the runtime of the Univariate Marginal Distribution Algorithm (UMDA) in the presence of epistasis and deception. We show that simple Evolutionary Algorithms (EAs) outperform the UMDA unless the selective pressure μ/λ is extremely high, where μ and λ are the parent and offspring population sizes, respectively. More precisely, we show that the UMDA with a parent population size of μ = Ω(log n) has an expected runtime of e^Ω(μ) on the DLB problem assuming any selective pressure μ/λ ≥ 14/1000, as opposed to the expected runtime of O(nλ log λ + n³) for the non-elitist (μ,λ) EA with μ/λ ≤ 1/e. These results illustrate inherent limitations of univariate EDAs against deception and epistasis, which are common characteristics of real-world problems. In contrast, empirical evidence reveals the efficiency of the bivariate MIMIC algorithm on the DLB problem. Our results suggest that one should consider EDAs with more complex probabilistic models when optimising problems with some degree of epistasis and deception. Comment: To appear in the 15th ACM/SIGEVO Workshop on Foundations of Genetic Algorithms (FOGA XV), Potsdam, Germany
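For illustration, here is a sketch of a DLB-style fitness function. The exact constants in the published definition may differ, so treat the scoring below as an assumed formulation that captures the trap described in the abstract: the bitstring is split into two-bit blocks, a prefix of 11-blocks is rewarded, and inside the first non-11 block the pattern 00 scores higher than 01 or 10, which misleads univariate models.

```python
def dlb(x):
    """Deceptive Leading Blocks (assumed formulation): reward the
    prefix of 11-blocks; in the first non-11 block, 00 beats 01/10,
    creating the deceptive trap for univariate EDAs."""
    n = len(x)
    assert n % 2 == 0, "DLB is defined on bitstrings of even length"
    m = 0  # number of leading 11-blocks
    while 2 * m < n and x[2 * m] == 1 and x[2 * m + 1] == 1:
        m += 1
    if 2 * m == n:
        return n                        # global optimum: all blocks are 11
    block = (x[2 * m], x[2 * m + 1])    # first non-11 block
    return 2 * m + (1 if block == (0, 0) else 0)
```

Note the trap: from 2m + 1 (critical block 00) the fitness must temporarily drop to 2m before reaching 2m + 2, so marginal frequencies are pulled towards 0 on the critical block.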

    Runtime analysis of the univariate marginal distribution algorithm under low selective pressure and prior noise

    We perform a rigorous runtime analysis for the Univariate Marginal Distribution Algorithm on the LeadingOnes function, a well-known benchmark function in the theory community of evolutionary computation with a high correlation between decision variables. For a problem instance of size n, the currently best known upper bound on the expected runtime is O(nλ log λ + n²) (Dang and Lehre, GECCO 2015), while a lower bound necessary to understand how the algorithm copes with variable dependencies is still missing. Motivated by this, we show that the algorithm requires an e^Ω(μ) runtime with high probability and in expectation if the selective pressure is low; otherwise, we obtain a lower bound of Ω(nλ/log(λ−μ)) on the expected runtime. Furthermore, we consider for the first time the algorithm on the function under a prior noise model and obtain an O(n²) expected runtime for the optimal parameter settings. Finally, our theoretical results are accompanied by empirical findings that not only match the rigorous analyses but also provide new insights into the behaviour of the algorithm. Comment: To appear at GECCO 2019, Prague, Czech Republic
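The abstract does not spell out the noise model, so the sketch below uses one common prior-noise model from the runtime-analysis literature (an assumption here, not necessarily the paper's exact model): before each evaluation, with probability p one uniformly random bit of the solution is flipped, while the stored solution itself stays undisturbed.

```python
import random

def leading_ones(x):
    """Length of the longest all-ones prefix of x."""
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def with_prior_noise(f, p, rng=random):
    """Wrap fitness f with prior noise (assumed model): with
    probability p, flip one uniformly random bit of a copy of the
    solution before evaluating; the original solution is unchanged."""
    def noisy(x):
        y = list(x)                  # evaluate a perturbed copy only
        if rng.random() < p:
            i = rng.randrange(len(y))
            y[i] = 1 - y[i]
        return f(y)
    return noisy
```

Usage: `noisy_lo = with_prior_noise(leading_ones, p=0.1)` gives a stochastic fitness whose value can differ between repeated calls on the same bitstring.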

    STUDY ON APPLICABILITY OF THE CONTACT OXIDATION PROCESS IN REMOVAL OF ORGANIC POLLUTANTS FROM TEXTILE WASTEWATER

    Joint Research on Environmental Science and Technology for the Earth

    Level-Based Analysis of the Univariate Marginal Distribution Algorithm

    Estimation of Distribution Algorithms (EDAs) are stochastic heuristics that search for optimal solutions by learning and sampling from probabilistic models. Despite their popularity in real-world applications, there is little rigorous understanding of their performance. Even for the Univariate Marginal Distribution Algorithm (UMDA) -- a simple population-based EDA assuming independence between decision variables -- the optimisation time on the linear problem OneMax was until recently undetermined. The incomplete theoretical understanding of EDAs is mainly due to a lack of appropriate analytical tools. We show that the recently developed level-based theorem for non-elitist populations, combined with anti-concentration results, yields upper bounds on the expected optimisation time of the UMDA. This approach results in the bound O(nλ log λ + n²) on two problems, LeadingOnes and BinVal, for population sizes λ > μ = Ω(log n), where μ and λ are parameters of the algorithm. We also prove that the UMDA with population sizes μ ∈ O(√n) ∩ Ω(log n) optimises OneMax in expected time O(λn), and for larger population sizes μ = Ω(√n log n), in expected time O(λ√n). The facility and generality of our arguments suggest that this is a promising approach to derive bounds on the expected optimisation time of EDAs. Comment: To appear in the Algorithmica Journal
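BinVal, the second linear benchmark named above, reads the bitstring as a binary number, so bit i carries weight 2^(n−1−i): it is linear like OneMax but with exponentially scaled weights, which is what makes the pair an interesting contrast for the level-based analysis. A minimal sketch of both, using the standard textbook definitions:

```python
def onemax(x):
    """OneMax: every bit has weight 1."""
    return sum(x)

def binval(x):
    """BinVal: bit i has weight 2^(n-1-i), i.e. the bitstring
    interpreted as a binary number; a single leading bit outweighs
    all the bits after it."""
    n = len(x)
    return sum(b << (n - 1 - i) for i, b in enumerate(x))
```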

    Characteristics of basal gastric juice in Helicobacter pylori-associated gastritis before and after eradication therapy

    Purpose: To evaluate the characteristics of basal gastric juice in Helicobacter pylori-positive patients before and after Helicobacter pylori eradication therapy. Methods: This was a cross-sectional descriptive study of 150 gastritis patients admitted to the Hospital of Can Tho University of Medicine and Pharmacy. The patients were divided into 2 groups: a study group (Helicobacter pylori gastritis patients) and a control group (non-Helicobacter pylori gastritis patients). The pH, HCO3- concentration, and the activities and concentrations of pepsin, lipase, and amylase were determined before and after treatment in the study group. Results: Patients with abnormal gastric juice comprised 76% of the study population. Mean gastric pH was 2.31 (range: 1.64 - 7.68), while the median concentration of HCO3- was 4.06 mmol/L (range: 0 - 73.04 mmol/L). The concentrations of pepsin, lipase, and amylase were 8.93, 0.93 and 1.38 ppm, respectively. The activities of pepsin, lipase, and amylase were 2.23, 0.28 and 0.04 U/mL, respectively. After successful eradication of Helicobacter pylori, pH and HCO3- levels decreased, and there were significant differences in the activities of pepsin and lipase before and after treatment (p < 0.05). Moreover, the levels of these parameters differed between patients in whom eradication was successful and those in whom it failed (p < 0.05). The concentrations and activities of pepsin and lipase were statistically different between the pre-treatment and post-treatment stages in both the successful and failed Helicobacter pylori eradication categories (p < 0.05). Conclusion: Basal gastric juice differs significantly between Helicobacter pylori-positive and Helicobacter pylori-negative patients. Intragastric ammonia produced by H. pylori may play a role in the increased pH of gastric juice.